
    Ring-LWE: applications to cryptography and their efficient realization

    The persistent progress of quantum computing, with the algorithms of Shor and of Proos and Zalka, has put our present RSA- and ECC-based public-key cryptosystems in peril. There is a flurry of activity in the cryptographic research community to replace classical cryptography schemes with their post-quantum counterparts. The learning with errors (LWE) problem introduced by Oded Regev offers a way to design secure cryptographic schemes in the post-quantum world. LWE was later adapted to polynomial rings for efficiency, a variant known as Ring-LWE. In this paper we discuss some of the Ring-LWE based schemes that have been designed, and we draw comparisons between different implementations of those schemes to illustrate their evolution from theoretical proposals to practically feasible schemes.

    Sublinear bounds on the distinguishing advantage for multiple samples

    The maximal achievable advantage of a (computationally unbounded) distinguisher to determine whether a source Z is distributed according to distribution $P_0$ or $P_1$, when given access to one sample of Z, is characterized by the statistical distance $d(P_0,P_1)$. Here, we study the distinguishing advantage when given access to several i.i.d. samples of Z. For n samples, the advantage is then naturally given by $d(P_0^{\otimes n},P_1^{\otimes n})$, which can be bounded as $d(P_0^{\otimes n},P_1^{\otimes n}) \le n \cdot d(P_0,P_1)$. This bound is tight for some choices of $P_0$ and $P_1$; thus, in general, a linear increase in the distinguishing advantage is unavoidable. In this work, we show new and improved bounds on $d(P_0^{\otimes n},P_1^{\otimes n})$ that circumvent the above pessimistic observation. Our bounds assume, necessarily, certain additional information on $P_0$ and/or $P_1$ beyond, or instead of, a bound on $d(P_0,P_1)$; in return, the bounds grow as $\sqrt{n}$, rather than linearly in n. Thus, whenever applicable, our bounds show that the number of samples necessary to distinguish the two distributions is substantially larger than what the standard bound would suggest. Such bounds have already been suggested in previous literature, but our new bounds are more general and (partly) stronger, and thus applicable to a larger class of instances. In a second part, we extend our results to a modified setting, where the distinguisher only has indirect access to the source Z. By this we mean that instead of obtaining samples of Z, the distinguisher now obtains i.i.d. samples that are chosen according to a probability distribution that depends on the (one) value produced by the source Z. Finally, we offer applications of our bounds to the area of cryptography. We show on a few examples from the cryptographic literature how our bounds give rise to improved results. For instance, importing our bounds into the analyses of Blondeau et al. for the security of block ciphers against multidimensional linear and truncated differential attacks, we obtain immediate improvements to their results.
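
    For intuition, here is a minimal numerical sketch (not from the paper) of the quantities involved: the statistical distance $d(P_0,P_1)$ for one sample versus $d(P_0^{\otimes n},P_1^{\otimes n})$ for n i.i.d. samples, computed by brute force for two toy distributions. The distributions and parameters are illustrative assumptions.

```python
from itertools import product

def statistical_distance(P0, P1):
    """d(P0, P1) = (1/2) * sum_x |P0(x) - P1(x)| over the joint support."""
    support = set(P0) | set(P1)
    return 0.5 * sum(abs(P0.get(x, 0.0) - P1.get(x, 0.0)) for x in support)

def product_distribution(P, n):
    """P^{tensor n}: distribution of n i.i.d. samples (support grows
    exponentially in n, so toy sizes only)."""
    dist = {}
    for xs in product(P.keys(), repeat=n):
        prob = 1.0
        for x in xs:
            prob *= P[x]
        dist[xs] = prob
    return dist

# Two close toy distributions (illustrative choice).
P0 = {0: 0.50, 1: 0.50}
P1 = {0: 0.55, 1: 0.45}

d1 = statistical_distance(P0, P1)
for n in (1, 2, 4, 8, 16):
    dn = statistical_distance(product_distribution(P0, n), product_distribution(P1, n))
    # The generic bound grows linearly as n * d1; the paper's bounds capture
    # the (typically much slower) sqrt(n)-type growth visible here.
    print(f"n={n:2d}  d = {dn:.4f}  linear bound = {min(1.0, n * d1):.4f}")
```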

    Improving Speed of Dilithium’s Signing Procedure

    Dilithium is a round-2 candidate digital signature scheme in NIST's initiative for post-quantum cryptographic schemes. Since Dilithium is built upon the "Fiat-Shamir with Aborts" framework, its signing procedure performs rejection sampling of its signatures to ensure they do not leak information about the secret key. The signing procedure is thus iterative in nature, with a number of rejected iterations that serve as unnecessary overhead hampering its overall performance. As a first contribution, we propose an optimization that reduces the computation in the rejected iterations through early evaluation of the conditional checks, which allows the rejection condition to be detected, and a given iteration to be rejected, as early as possible. We also incorporate a number of standard optimizations such as unrolling and inlining to further improve the speed of the signing procedure. We incorporate and evaluate our optimizations in the software implementation of Dilithium on both the Intel Core i5-4460 and ARM Cortex-M4 CPUs. As a second contribution, we identify opportunities for a more refined evaluation of Dilithium's signing procedure in several scenarios where pre-computations can be carried out. We also evaluate the performance of our optimizations and the memory requirements for the pre-computed intermediates in the considered scenarios. We obtain speed-ups ranging from 6% up to 35% across the aforementioned scenarios, thus presenting the fastest software implementation of Dilithium to date.
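
    To illustrate the early-evaluation idea, here is a hedged Python sketch of a Fiat-Shamir-with-aborts-style rejection loop that tests each signature coefficient as it is produced and abandons the iteration at the first violation, rather than finishing the whole vector first. All names and bounds (GAMMA1, BETA, the stand-in cs1 for the product c*s1) are illustrative placeholders, not Dilithium's actual parameters or API.

```python
import secrets

# Illustrative bounds only, not Dilithium's real parameters.
GAMMA1, BETA = 2**17, 120

def sample_y(n):
    # Fresh masking vector with coefficients in [-GAMMA1, GAMMA1).
    return [secrets.randbelow(2 * GAMMA1) - GAMMA1 for _ in range(n)]

def sign_with_early_abort(cs1, n, max_iters=1000):
    """cs1 stands in for the product c*s1 of one iteration; in the real
    scheme it is recomputed from a fresh challenge every round."""
    for _ in range(max_iters):
        y = sample_y(n)
        z, rejected = [], False
        for yi, ci in zip(y, cs1):
            zi = yi + ci
            # Early evaluation: abandon this iteration at the first
            # offending coefficient instead of finishing the whole vector
            # and only then testing the rejection bound.
            if abs(zi) >= GAMMA1 - BETA:
                rejected = True
                break
            z.append(zi)
        if not rejected:
            return z
    return None

cs1 = [7, -3, 50, -120]   # toy stand-in for c*s1
print(sign_with_early_abort(cs1, n=len(cs1)))
```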

    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors (LWE) problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman (BKW) algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general, this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving within half a day an LWE instance with dimension $n = 128$, modulus $q = n^2$, Gaussian noise $\alpha = 1/(\sqrt{n/\pi} \log^2 n)$ and binary secret, using $2^{28}$ samples, while the previous best result based on BKW claims a time complexity of $2^{74}$ with $2^{60}$ samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time $2^{(\ln 2/2+o(1))n/\log \log n}$. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density $o(1)$, which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010.
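
    As background for the quantization step, the following toy sketch shows plain modulus switching, the operation the paper's technique generalizes: an LWE sample over $Z_q$ is rescaled to a smaller modulus $p$ by coordinate-wise rounding, preserving the LWE relation up to a small extra rounding error. All parameters are illustrative.

```python
import random

def mod_switch(a, b, q, p):
    """Rescale an LWE sample (a, b) from Z_q to Z_p by coordinate-wise rounding."""
    a_p = [round(ai * p / q) % p for ai in a]
    b_p = round(b * p / q) % p
    return a_p, b_p

n, q, p = 8, 12289, 257                          # toy parameters
s = [random.randrange(2) for _ in range(n)]      # binary secret, as in BinaryLWE
a = [random.randrange(q) for _ in range(n)]
e = random.randrange(-3, 4)                      # small noise
b = (sum(ai * si for ai, si in zip(a, s)) + e) % q

a2, b2 = mod_switch(a, b, q, p)
# The switched sample still satisfies b2 ~ <a2, s> + small error (mod p):
# each rounded coordinate contributes at most 1/2 per nonzero secret bit.
err = (b2 - sum(ai * si for ai, si in zip(a2, s))) % p
print("centered residual error:", min(err, p - err))
```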

    Compact Ring-LWE Cryptoprocessor

    In this paper we propose an efficient and compact processor for a ring-LWE based encryption scheme. We present three optimizations for the Number Theoretic Transform (NTT) used for polynomial multiplication: we avoid pre-processing in the negative wrapped convolution by merging it with the main algorithm, we reduce the fixed computation cost of the twiddle factors, and we propose an advanced memory access scheme. These optimization techniques reduce both the cycle and memory requirements. Finally, we also propose an optimization of the ring-LWE encryption system that reduces the number of NTT operations from five to four, resulting in a 20% speed-up. We use these computational optimizations along with several architectural optimizations to design an instruction-set ring-LWE cryptoprocessor. For dimension 256, our processor performs encryption/decryption operations in 20/9 µs on a Virtex 6 FPGA and only requires 1349 LUTs, 860 FFs, 1 DSP-MULT and 2 BRAMs. Similarly, for dimension 512, the processor takes 48/21 µs for performing encryption/decryption operations and only requires 1536 LUTs, 953 FFs, 1 DSP-MULT and 3 BRAMs. Our processors are therefore more than three times smaller than the current state-of-the-art hardware implementations, whilst running somewhat faster.
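
    For reference, the sketch below spells out the semantics of the negative wrapped (negacyclic) convolution that the processor's NTT computes, i.e. multiplication in $Z_q[X]/(X^n+1)$, in schoolbook $O(n^2)$ form; the paper's optimizations concern how an $O(n \log n)$ NTT realizes this, e.g. by folding the pre-scaling by powers of a 2n-th root of unity into the twiddle factors. Parameters are illustrative.

```python
def negacyclic_mul(f, g, q):
    """Schoolbook multiplication in Z_q[X]/(X^n + 1); the NTT computes the
    same product in O(n log n)."""
    n = len(f)
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                res[k] = (res[k] + fi * gj) % q
            else:
                # X^n = -1: coefficients wrap around with a sign flip, which
                # is the "negative wrapped" part of the convolution.
                res[k - n] = (res[k - n] - fi * gj) % q
    return res

q = 12289                               # toy ring-LWE-style modulus
print(negacyclic_mul([1, 2, 3, 4], [5, 6, 7, 8], q))
```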

    Single-Trace Side-Channel Attacks on Masked Lattice-Based Encryption

    Although lattice-based cryptography has proven to be a particularly efficient approach to post-quantum cryptography, its security against side-channel attacks is still a very open topic. There already exist some first works that use masking to achieve DPA security. However, for public-key primitives, SPA attacks that use just a single trace are also highly relevant. For lattice-based cryptography, this implementation-security aspect is still unexplored. In this work, we present the first single-trace attack on lattice-based encryption. As only a single side-channel observation is needed for full key recovery, it can also be used to attack masked implementations. We use leakage coming from the Number Theoretic Transform, which is at the heart of almost all efficient lattice-based implementations. This means that our attack can be adapted to a large range of other lattice-based constructions and their respective implementations. Our attack consists of three main steps. First, we perform template matching on all modular operations in the decryption process. Second, we efficiently combine all this side-channel information using belief propagation. Third, we perform lattice decoding to recover the private key. We show that the attack allows full key recovery not only in a generic noisy Hamming-weight setting, but also based on real traces measured on an ARM Cortex-M4F microcontroller.
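
    As a toy model of the first step, the sketch below performs template matching under the generic noisy Hamming-weight leakage model mentioned in the abstract: one leakage sample is converted into a posterior distribution over the intermediate value that produced it. The modulus, noise level, and function names are illustrative assumptions; the paper's later steps (belief propagation, lattice decoding) would consume many such distributions.

```python
import math

def hamming_weight(x):
    return bin(x).count("1")

def template_match(leakage, candidates, sigma):
    """Posterior P(value | leakage), assuming leakage = HW(value) + Gaussian
    noise of standard deviation sigma (the generic model in the abstract)."""
    likelihoods = [math.exp(-(leakage - hamming_weight(v)) ** 2 / (2 * sigma ** 2))
                   for v in candidates]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

q = 7681                  # toy modulus; candidate intermediates are residues mod q
leak = 5.3                # one observed noisy leakage sample (made up)
probs = template_match(leak, range(q), sigma=1.0)
best = max(range(q), key=lambda v: probs[v])
print("most likely value:", best, "with Hamming weight", hamming_weight(best))
```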

    Sampling the Integers with Low Relative Error

    Randomness is an essential part of any secure cryptosystem, but many constructions rely on distributions that are not uniform. This is particularly true for lattice-based cryptosystems, which more often than not make use of discrete Gaussian distributions over the integers. For practical purposes it is crucial to evaluate the impact that approximation errors have on the security of a scheme, to provide the best possible trade-off between security and performance. Recent years have seen surprising results allowing the use of relatively low precision while maintaining high levels of security. A key insight in these results is that sampling a distribution with low relative error can provide very strong security guarantees. Since floating-point numbers provide guarantees on the relative approximation error, they seem a suitable tool in this setting, but it is not obvious which sampling algorithms can actually profit from them. While previous works have shown that inversion sampling can be adapted to provide a low relative error (Pöppelmann et al., CHES 2014; Prest, ASIACRYPT 2017), other works have called into question whether this is possible for other sampling techniques (Zheng et al., ePrint report 2018/309). In this work, we consider all sampling algorithms that are popular in the cryptographic setting and analyze the relationship between floating-point precision and the resulting relative error. We show that all of the algorithms either natively achieve a low relative error or can be adapted to do so.
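
    To make the notion concrete, here is a small sketch (an illustrative stand-in, not the paper's analysis) that measures the relative error of double-precision evaluation of the unnormalized discrete Gaussian weights $\rho(x) = \exp(-x^2/2\sigma^2)$ against a high-precision reference. The relative error stays small even deep in the tail, which is the property low-precision samplers exploit.

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50    # 50-digit reference computation

def max_relative_error(sigma, tail):
    """Max relative error of double-precision rho(x) = exp(-x^2 / 2 sigma^2)
    against the high-precision reference, over x in [-tail, tail]."""
    worst = 0.0
    for x in range(-tail, tail + 1):
        ref = (-Decimal(x) ** 2 / (2 * Decimal(sigma) ** 2)).exp()
        approx = Decimal(math.exp(-x * x / (2.0 * sigma * sigma)))
        worst = max(worst, abs(float((approx - ref) / ref)))
    return worst

# Even deep into the tail, plain double precision keeps the relative error
# around machine epsilon scaled by the magnitude of the exponent.
print(max_relative_error(sigma=3.2, tail=40))
```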

    Short, Invertible Elements in Partially Splitting Cyclotomic Rings and Applications to Lattice-Based Zero-Knowledge Proofs

    When constructing practical zero-knowledge proofs based on the hardness of the Ring-LWE or the Ring-SIS problems over polynomial rings $Z_p[X]/(X^n+1)$, it is often necessary that the challenges come from a set $\mathcal{C}$ that satisfies three properties: the set should be large (around $2^{256}$), the elements in it should have small norms, and all the non-zero elements in the difference set $\mathcal{C}-\mathcal{C}$ should be invertible. The first two properties are straightforward to satisfy, while the third one requires us to make efficiency compromises. We can either work over rings where the polynomial $X^n+1$ only splits into two irreducible factors modulo $p$, which makes the speed of the multiplication operation in the ring sub-optimal; or we can limit our challenge set to polynomials of smaller degree, which requires them to have (much) larger norms. In this work we show that one can use the optimal challenge sets $\mathcal{C}$ and still have the polynomial $X^n+1$ split into more than two factors. This comes as a direct application of our more general result that states that all non-zero polynomials with "small" coefficients in the cyclotomic ring $Z_p[X]/(\Phi_m(X))$ are invertible (where "small" depends on the size of $p$ and how many irreducible factors the $m$-th cyclotomic polynomial $\Phi_m(X)$ splits into). We furthermore establish sufficient conditions for $p$ under which $\Phi_m(X)$ will split in such fashion. For the purposes of implementation, if the polynomial $X^n+1$ splits into $k$ factors, we can run the FFT for $\log k$ levels until switching to Karatsuba multiplication. Experimentally, we show that increasing the number of levels from one to three or four results in a speedup by a factor of $\approx 2$ to $3$. We point out that this improvement comes completely for free simply by choosing a modulus $p$ that has certain algebraic properties. In addition to the speed improvement, having the polynomial split into many factors has other applications; e.g., when one embeds information into the Chinese Remainder representation of the ring elements, the more the polynomial splits, the more information one can embed into an element.
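
    The splitting behaviour can be checked numerically: if $Y^k+1$ has $k$ roots $r_j$ modulo $p$, then $X^n+1 = \prod_j (X^{n/k}-r_j) \bmod p$, so the ring splits into $k$ CRT components of degree $n/k$. The toy sketch below counts such roots for a small prime and exhibits a partial split; the parameters are illustrative, and the paper gives the actual conditions on $p$.

```python
def roots_of_minus_one(k, p):
    """Brute-force the roots of Y^k = -1 mod p (fine for a toy prime)."""
    return [r for r in range(p) if pow(r, k, p) == p - 1]

n, p = 16, 17   # toy choice: X^16 + 1 over Z_17
for k in (1, 2, 4, 8, 16):
    rs = roots_of_minus_one(k, p)
    if len(rs) == k:
        print(f"k={k:2d}: X^{n}+1 splits into {k} factors of degree {n // k} mod {p}")
    else:
        print(f"k={k:2d}: Y^{k}=-1 has only {len(rs)} roots mod {p}; no split into {k} factors")
```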

    Mechanism of PP2A-mediated IKKβ dephosphorylation: a systems biological approach

    BACKGROUND: Biological effects of nuclear factor-κB (NF-κB) can differ tremendously depending on the cellular context. For example, NF-κB induced by interleukin-1 (IL-1) is converted from an inhibitor of death receptor induced apoptosis into a promoter of ultraviolet-B radiation (UVB)-induced apoptosis. This conversion requires prolonged NF-κB activation and is facilitated by IL-1 + UVB-induced abrogation of the negative feedback loop for NF-κB, involving a lack of inhibitor of κB (IκBα) protein reappearance. Permanent activation of the upstream kinase IKKβ results from UVB-induced inhibition of the catalytic subunit of the Ser-Thr phosphatase PP2A (PP2Ac), leading to immediate phosphorylation and degradation of newly synthesized IκBα. RESULTS: To investigate the mechanism underlying the general PP2A-mediated tuning of IKKβ phosphorylation upon IL-1 stimulation, we have developed a strictly reduced mathematical model based on ordinary differential equations which includes the essential processes concerning the IL-1 receptor, IKKβ and PP2A. Combining experimental and modelling approaches, we demonstrate that constitutively active, but not post-stimulation activated, PP2A tunes out IKKβ phosphorylation, thus allowing for IκBα resynthesis in response to IL-1. Identifiability analysis and determination of confidence intervals reveal that the model allows reliable predictions regarding the dynamics of PP2A deactivation and IKKβ phosphorylation. Additionally, scenario analysis is used to scrutinize several hypotheses regarding the mode of UVB-induced PP2Ac inhibition. The model suggests that downregulation of PP2Ac activity, which results in prevention of IκBα reappearance, is not a direct UVB action but requires an intermediary. CONCLUSION: The model developed here can be used as a reliable building block of larger NF-κB models and offers comprehensive simplification potential for future modelling of NF-κB signalling. It gives more insight into the newly discovered mechanisms for IKK deactivation and allows for substantiated predictions and investigation of different hypotheses. The evidence of constitutive activity of PP2Ac at the IKK complex provides new insights into the feedback regulation of NF-κB, which is crucial for the development of new anti-cancer strategies.
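
    As a loose illustration of the modelling approach (a minimal sketch, not the paper's fitted model), the following ODE toy has IKKβ phosphorylated in an IL-1-dependent way and dephosphorylated by constitutively active PP2Ac; lowering the PP2Ac activity parameter mimics UVB-induced inhibition and keeps IKKβ phosphorylation high. All species, rate constants, and the IL-1 input are assumed placeholders.

```python
import numpy as np
from scipy.integrate import odeint

def model(y, t, k_phos, k_dephos, il1, pp2ac):
    ikk, ikk_p = y                        # unphosphorylated / phosphorylated IKKbeta
    phos = k_phos * il1(t) * ikk          # IL-1 driven phosphorylation
    dephos = k_dephos * pp2ac * ikk_p     # removal by constitutively active PP2Ac
    return [dephos - phos, phos - dephos]

t = np.linspace(0, 60, 601)               # minutes
il1 = lambda t: np.exp(-t / 15.0)         # decaying IL-1 stimulus (placeholder)
for pp2ac, label in [(1.0, "PP2Ac active"), (0.1, "PP2Ac inhibited (UVB-like)")]:
    sol = odeint(model, [1.0, 0.0], t, args=(0.5, 0.3, il1, pp2ac))
    print(f"{label}: phospho-IKK at t=60 min = {sol[-1, 1]:.3f}")
```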

    Simple Lattice Trapdoor Sampling from a Broad Class of Distributions

    At the center of many lattice-based constructions is an algorithm that samples a short vector s satisfying $[A | AR - HG]s = t \bmod q$, where A, AR, H, G are public matrices and R is a trapdoor. Although the algorithm crucially relies on the knowledge of the trapdoor R to perform this sampling efficiently, the distribution it outputs should be independent of R given the public values. We present a new, simple algorithm for performing this task. The main novelty of our sampler is that the distribution of s does not need to be Gaussian, whereas all previous works crucially used the properties of the Gaussian distribution to produce such an s. The advantage of using a non-Gaussian distribution is that we are able to avoid the high-precision arithmetic that is inherent in Gaussian sampling over arbitrary lattices. So while the norm of our output vector s is on the order of $\sqrt{n}$ to $n$ times larger (the representation length, though, is only a constant factor larger) than in the samplers of Gentry, Peikert, Vaikuntanathan (STOC 2008) and Micciancio, Peikert (EUROCRYPT 2012), the sampling itself can be done very efficiently. This provides a useful time/output trade-off for devices with constrained computing power. In addition, we believe that the conceptual simplicity and generality of our algorithm may lead to it finding other applications.
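
    For context, the sketch below shows the simplest deterministic instance of such preimage sampling: a short binary preimage under the gadget matrix $G = I_n \otimes (1, 2, 4, \dots, 2^{k-1})$ obtained by bit decomposition. The paper's sampler is randomized, supports a broad class of output distributions, and handles the full matrix $[A | AR - HG]$; this toy covers only the G component, and all names are illustrative.

```python
def gadget_preimage(t, q):
    """Deterministic binary preimage under G: for each entry t_i, output its
    bit decomposition, so that sum_j 2^j * s[i*k + j] = t_i (mod q)."""
    k = q.bit_length()
    s = []
    for ti in t:
        ti %= q
        s.extend((ti >> j) & 1 for j in range(k))
    return s

def gadget_apply(s, n, q):
    """Recompute G s mod q from the flat bit vector s."""
    k = q.bit_length()
    return [sum(s[i * k + j] << j for j in range(k)) % q for i in range(n)]

q, t = 257, [200, 13, 77]   # toy modulus and target (illustrative)
s = gadget_preimage(t, q)
assert gadget_apply(s, len(t), q) == [x % q for x in t]
print(s)
```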